Multimodal Annotations in Gesture and Sign Language Studies
Authors
Abstract
For multimodal annotations, an exhaustive encoding system for gestures was developed to facilitate research. The structural requirements of multimodal annotations were analyzed to develop an Abstract Corpus Model, which forms the basis for a powerful annotation and exploitation tool for multimedia recordings and for the definition of the XML-based EUDICO Annotation Format. Finally, a metadata-based data management environment has been set up to facilitate resource discovery and, especially, corpus management. By means of an appropriate digitization policy and the online availability of the recordings, researchers have been able to build up a large corpus covering gesture and sign language data.
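The abstract refers to the XML-based EUDICO Annotation Format as the tool's interchange format. As a rough illustration only, the Python sketch below builds a minimal, EAF-like document with a time order and one time-aligned tier; the element names follow the general conventions of the published EAF schema, but the structure is simplified and should not be read as the paper's exact format.

```python
# Illustrative sketch: a minimal, EAF-like tier-based annotation document
# built with Python's standard library. Simplified and not schema-valid;
# element names only follow the general EAF conventions.
import xml.etree.ElementTree as ET

doc = ET.Element("ANNOTATION_DOCUMENT", {"AUTHOR": "", "FORMAT": "2.0"})

# Time slots anchor annotations to points in the media file (milliseconds).
time_order = ET.SubElement(doc, "TIME_ORDER")
ET.SubElement(time_order, "TIME_SLOT", {"TIME_SLOT_ID": "ts1", "TIME_VALUE": "1200"})
ET.SubElement(time_order, "TIME_SLOT", {"TIME_SLOT_ID": "ts2", "TIME_VALUE": "1850"})

# Each tier holds one layer of the multimodal annotation,
# e.g. right-hand gesture phases.
tier = ET.SubElement(doc, "TIER",
                     {"TIER_ID": "RH-gesture", "LINGUISTIC_TYPE_REF": "gesture"})
ann = ET.SubElement(ET.SubElement(tier, "ANNOTATION"), "ALIGNABLE_ANNOTATION",
                    {"ANNOTATION_ID": "a1",
                     "TIME_SLOT_REF1": "ts1", "TIME_SLOT_REF2": "ts2"})
ET.SubElement(ann, "ANNOTATION_VALUE").text = "stroke"

print(ET.tostring(doc, encoding="unicode"))
```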
Similar resources
Applying mean shift and motion detection approaches to hand tracking in sign language
Hand gesture recognition is very important for communication in sign language. In this paper, an effective object tracking and hand gesture recognition method is proposed. The method is a combination of two well-known approaches, the mean shift and motion detection algorithms. The mean shift algorithm tracks objects based on their color, so when the hand passes the face, occlusion occurs. Several...
Contributions of Sign Language Research to Gesture Understanding: What Can Multimodal Computational Systems Learn from Sign Language Research
This paper considers neurological, formational and functional similarities between gestures and signed verb predicates. From analysis of verb sign movement, we offer suggestions for analyzing gestural movement (motion capture, kinematic analysis, trajectory internal structure). From analysis of verb sign distinctions, we offer suggestions for analyzing co-speech gesture functions.
Proposal for a Deep Learning Architecture for Activity Recognition
Activity recognition from computer vision plays an important role in research towards applications such as human-computer interfaces, intelligent environments, surveillance, or medical systems. In this paper, we propose a gesture recognition system based on a deep learning architecture and show how it performs when trained with changing multimodal input data on an Italian sign language dataset. The...
American Sign Language Generation: Multimodal NLG with Multiple Linguistic Channels
Software to translate English text into American Sign Language (ASL) animation can improve information accessibility for the majority of deaf adults with limited English literacy. ASL natural language generation (NLG) is a special form of multimodal NLG that uses multiple linguistic output channels. ASL NLG technology has applications for the generation of gesture animation and other communicat...
Annotation of Sign and Gesture Cross-linguistically
This paper discusses the construction of a cross-linguistic, bimodal corpus containing three modes of expression: expressions from two sign languages; speech and gestural expressions in two spoken languages; and pantomimic expressions by users of two spoken languages who are asked to convey information without speaking. We discuss some problems and tentative solutions for the annotation of u...